Enforcing the rules

Quick demo in preparation for the Indonesia meeting.

So far we have had either agents that follow the rules strictly and never cheat (the default approach) or agents that decide individually whether legality matters at all (the erotetic approach). Here I try a simpler approach where agents aren’t forced to follow the rules, but will do so if they think breaking the law doesn’t pay.
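The decision rule can be sketched as a simple expected-value comparison (a minimal sketch; the function name, parameters, and the assumption of independent hourly checks are mine, not the model's actual implementation):

```python
def breaks_law(expected_extra_profit, catch_probability_per_hour,
               fine, hours_towed):
    """Return True if the agent thinks cheating pays.

    Assumes independent hourly checks: the chance of being caught at
    least once over `hours_towed` hours compounds geometrically.
    """
    p_caught = 1 - (1 - catch_probability_per_hour) ** hours_towed
    expected_fine = p_caught * fine
    return expected_extra_profit > expected_fine
```

With perfect enforcement and a high fine this always returns `False`; with zero enforcement it always returns `True`, matching the two extremes below.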

Suppose you want to place an MPA (marine protected area). If enforcement is perfect (100% probability of being caught in the MPA, with high fines) you get the following dynamic:

If enforcement is null (0% probability of catching MPA transgressors), agents just ignore the rules entirely:

If enforcement is mediocre (15% chance of catching transgressors for each hour they tow), agents tend to avoid the MPA initially, but every now and then a few transgressors get in (and if not caught, they tend to be imitated by others).
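Note that a 15% per-hour chance compounds quickly over a trip. Assuming independent hourly checks (my assumption, for illustration), the chance of getting away clean shrinks fast with towing time:

```python
# Chance of escaping detection entirely, assuming each towed hour is
# an independent 15% chance of being caught.
p_hourly = 0.15
for hours in (1, 5, 10):
    p_escape = (1 - p_hourly) ** hours
    print(f"{hours:2d} hours towed -> {p_escape:.0%} chance of escaping")
```

So "mediocre" hourly enforcement is still fairly strong against long incursions, which is consistent with agents mostly staying out.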

This is all driven by explore-exploit dynamics: agents learn from fines whether breaking the law pays.
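The explore-exploit loop can be sketched as a two-armed bandit (all the numbers, names, and the epsilon-greedy rule here are invented for illustration; the actual model's learning rule may differ):

```python
import random

def simulate(trips=1000, p_catch=0.15, fine=2000.0, epsilon=0.1, seed=0):
    """Epsilon-greedy agent choosing to fish inside or outside the MPA.

    Each trip it mostly exploits whichever option has paid best so far,
    and updates a running average of realised profit minus any fine.
    """
    rng = random.Random(seed)
    value = {"inside": 0.0, "outside": 0.0}   # running profit estimates
    counts = {"inside": 0, "outside": 0}
    for _ in range(trips):
        if rng.random() < epsilon:            # explore: random choice
            choice = rng.choice(["inside", "outside"])
        else:                                 # exploit: best estimate
            choice = max(value, key=value.get)
        profit = 300.0 if choice == "inside" else 200.0  # MPA is richer
        if choice == "inside" and rng.random() < p_catch:
            profit -= fine                    # caught transgressing
        counts[choice] += 1
        value[choice] += (profit - value[choice]) / counts[choice]
    return value
```

With these numbers the expected payoff inside is 300 − 0.15 × 2000 = 0, so the agent learns to stay outside; lower the fine or the catch probability and cheating starts to pay.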

Optimal Enforcement

Of course you can always look for the best possible MPA. If enforcement is 100% and you are trying to optimize the profitability of the fleet over 20 simulated years, you will find the following optimal MPA:

But now imagine that enforcement is expensive, so that really you want to maximize: \[ \text{Average 20-year profits} - p \cdot 10{,}000{,}000 - \text{\# of protected cells} \cdot 10{,}000 \] where \(p\) is the probability of catching transgressors per hour in the MPA.
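Written out as code, the objective is just (constants taken directly from the formula; the function and argument names are mine):

```python
def objective(average_20yr_profits, p, n_protected_cells):
    """Fleet profits minus enforcement cost minus protected-area cost."""
    return (average_20yr_profits
            - p * 10_000_000
            - n_protected_cells * 10_000)
```

For example, 8M in average profits with p = 0.7 and 50 protected cells nets out to 0.5M after the enforcement and area penalties.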

It turns out that in this case the optimizer builds a larger MPA with a high but not perfect probability of catching transgressors, \(p \approx 0.7\). The MPA is larger even though it’s no longer free because the optimizer figured out that some cheating will occur, so it needs to expand the protected area to be “safe”.

Cool directions

It’d be nice to model enforcement explicitly, with patrols or satellites and so on.